
    Simultaneous Bilinguals’ Comprehension of Accented Speech

    L2-accented speech recognition has typically been studied with monolingual listeners or late L2 learners, but simultaneous bilinguals may have a different experience: their two phonologies offer flexibility in phonological-lexical mapping (Samuel & Larraza, 2015), which may be advantageous. On the other hand, the two languages create greater lexical competition (Marian & Spivey, 2003), which may impede successful L2-accented speech recognition. The competition between a bilingual's two languages is the oft-cited explanation, for example, for why bilinguals underperform monolinguals in native-accented speech-in-noise tasks (Rogers et al., 2006). To investigate the effect of bilingualism on L2-accented speech recognition, the current studies compare monolingual and simultaneous bilingual listeners in three separate experiments. In the first study, both groups repeated sentences produced by speakers of Mandarin-accented English whose English proficiencies varied. In the second study, the stimuli were presented in varying levels and types of noise, and a native-accented speaker was included. In each of these first two studies, the sentences were semantically anomalous (i.e., nonsensical). In the third study, the stimuli were meaningful sentences, presented in a single noise condition and spoken by either a native speaker or an L2-accented speaker. Mixed-effects models revealed differences in L2-accented speech recognition driven by listeners' language backgrounds only in Experiments 2 and 3; in Experiment 1, the groups' performance was statistically indistinguishable. Results in Experiments 2 and 3 also replicated the prior finding that bilinguals perform worse than monolinguals for native-accented speech in noise.
We propose that neither a flexible phonological-lexical mapping system nor increased lexical competition alone can sufficiently explain the deficit (relative to monolinguals) that simultaneous bilinguals exhibit when faced with L2-accented speech in real-world listening conditions. We discuss the possible roles of processing capacity and cognitive load, and suggest that these two factors are more likely to contribute to the experimental outcomes. Future studies using pupillometry to explore these hypotheses are also discussed.

    The Effects of Language Background and Foreign Accent on Listening Comprehension

    Listening to a linguistic signal is an involved process, and it rarely occurs in absolute silence. A person trying to listen to and comprehend speech is likely in an environment with some additional noise: white noise from a fan, passing traffic, construction, or simply other talkers. Each of these additional auditory signals creates an unfavorable environment for the listener trying to capture the target signal. Research has quantified and described the effects of noise on the comprehension of linguistic signals, and has also shown that bilinguals and monolinguals, though indistinguishable in quiet conditions, are differentially affected by noise: bilinguals perform significantly worse in adverse listening conditions when tasked with comprehending a linguistic signal. What has yet to be established is how a signal with intrinsic phonological variation differentially affects monolinguals and bilinguals. This study is a small-scale pilot that investigates this question: what bearing does bilingualism have on the comprehension of foreign-accented speech in quiet and in noise? Stimuli include sentences spoken in English with five different accents: Neutral English (the English typical of the NYC area), Latin American Spanish English, Mandarin English, Italian English, and Indian English. A true-false verification task assesses the participants' comprehension of the sentences, which are delivered auditorily such that no two sentences with the same accent are heard consecutively. All five accents are heard both in quiet and in noise, in two separate blocks. Accuracy and reaction times are analyzed.

    Multidimensional signals and analytic flexibility: Estimating degrees of freedom in human speech analyses

    Recent empirical studies have highlighted the large degree of analytic flexibility in data analysis, which can lead to substantially different conclusions based on the same data set. Researchers have thus expressed concern that these researcher degrees of freedom might facilitate bias and lead to claims that do not stand the test of time. Even greater flexibility is to be expected in fields in which the primary data lend themselves to a variety of possible operationalizations. The multidimensional, temporally extended nature of speech constitutes an ideal testing ground for assessing the variability in analytic approaches, which derives not only from aspects of statistical modeling but also from decisions regarding the quantification of the measured behavior. In the present study, we gave the same speech production data set to 46 teams of researchers and asked them to answer the same research question, resulting in substantial variability in reported effect sizes and their interpretation. Using Bayesian meta-analytic tools, we further find little to no evidence that the observed variability can be explained by analysts' prior beliefs, expertise, or the perceived quality of their analyses. In light of this idiosyncratic variability, we recommend that researchers more transparently share the details of their analyses, strengthen the link between theoretical construct and quantitative system, and calibrate their (un)certainty in their conclusions.